Machine Intelligence


At CES 2026, Everything Is AI. What Matters Is How You Use It

WIRED

At CES 2026, Everything Is AI. Integrated chatbots and built-in machine intelligence are no longer standout features in consumer tech. If companies want to win in the AI era, they've got to hone the user experience. The New Year's Eve champagne isn't even warm yet, and CES week is already upon us. The giant annual celebration of consumer tech kicks off the first full week of January as companies across the world convene in Las Vegas to hawk their latest innovations.


Evolutionary System 2 Reasoning: An Empirical Proof

Ma, Zeyuan, Huang, Wenqi, Song, Guo-Huan, Guo, Hongshu, Ma, Sijie, Cao, Zhiguang, Gong, Yue-Jiao

arXiv.org Artificial Intelligence

Machine intelligence marks the ultimate dream of making machines' intelligence comparable to that of human beings. While recent progress in Large Language Models (LLMs) shows substantial task-specific skill across a wide array of downstream tasks, these models more or less fall short of general intelligence. Following the correlation between intelligence and system 2 reasoning (slow thinking), this paper addresses a worthwhile research question: could machine intelligence such as LLMs be evolved to acquire reasoning ability (not a specific skill), just as humans have? To this end, we propose the evolutionary reasoning optimization (ERO) framework, which performs survival of the fittest over a population of LLMs to search for an individual with strong reasoning ability. Given a reasoning task, ERO first initializes multiple LLMs as a population, after which an evolutionary strategy evolves the population to maximize the quantified reasoning score of the best individual. Based on experiments on representative test suites, we report two surprising empirical findings: i) the latest LLMs such as GPT-5 still show limited system 2 reasoning ability; ii) with ERO's simple evolution loop, a relatively weak model (Qwen-7B) can be enhanced to exhibit powerful reasoning ability. Our project is available at https://github.com/MetaEvo/ERO for reproduction.
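The procedure described in the abstract is a standard evolve-and-select loop over a population of LLM candidates. Below is a minimal Python sketch of such a loop for orientation; every name in it (LLMIndividual, score_reasoning, mutate, evolve) is a hypothetical illustration of the survival-of-the-fittest idea, not code from the ERO repository, and the fitness and mutation operators are placeholders.

import random
from dataclasses import dataclass

@dataclass
class LLMIndividual:
    # A candidate is represented here only by a mutable system prompt; the real
    # framework may evolve prompts, weights, or other components (assumption).
    system_prompt: str

def score_reasoning(individual, task_set):
    # Placeholder fitness: fraction of reasoning tasks the candidate solves.
    # Each task is assumed to be a callable that returns 1.0 on success, 0.0 otherwise.
    return sum(task(individual.system_prompt) for task in task_set) / len(task_set)

def mutate(individual):
    # Placeholder variation operator (e.g., prompt rewriting by another LLM).
    return LLMIndividual(individual.system_prompt + " Think step by step.")

def evolve(task_set, population_size=8, generations=20):
    # Initialize a population of candidates, then iterate selection and variation.
    population = [LLMIndividual("You are a careful reasoner.") for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=lambda ind: score_reasoning(ind, task_set), reverse=True)
        survivors = ranked[: population_size // 2]  # survival of the fittest
        offspring = [mutate(random.choice(survivors)) for _ in range(population_size - len(survivors))]
        population = survivors + offspring
    # Return the best individual found, i.e., the candidate with the highest reasoning score.
    return max(population, key=lambda ind: score_reasoning(ind, task_set))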


In Defense of the Turing Test and its Legacy

Gonçalves, Bernardo

arXiv.org Artificial Intelligence

I argue that Turing's original test was co-opted by Weizenbaum and that six of the most common criticisms of the Turing test are unfair to both Turing's argument and the historical development of AI. The Turing test has faced criticism for decades, most recently at the Royal Society event "Celebrating the 75th Anniversary of the Turing Test." The question of the Turing test's significance has intensified with recent advances in large language model technology, which now enable machines to pass it. In this article, I address six of the most common criticisms of the Turing test: The Turing test encourages fooling people; Turing overestimated human intelligence, as people can be easily fooled (the ELIZA effect); The Turing test is not a good benchmark for AI; Turing's 1950 paper is not serious and/or has contradictions; Imitation should not be a goal for AI, and it is also harmful to society; Passing the Turing test teaches nothing about AI. All six criticisms largely derive from Joseph Weizenbaum's influential reinterpretation of the Turing test. The first four fail to withstand a close examination of the internal logic of Turing's 1950 paper, particularly when the paper is situated within its mid-twentieth-century context.


Why an AI 'godfather' is quitting Meta after 12 years

BBC News

Why an AI 'godfather' is quitting Meta after 12 years. Just a couple of weeks ago, one of the godfathers of artificial intelligence was in St James's Palace being handed an award from King Charles for his work in artificial intelligence (AI). Professor Yann LeCun was being honoured along with six other recipients for his contributions to the field, which have been credited with advancing deep learning. But Prof LeCun is at odds with some of the AI world over the future of the generation-defining technology. And now he is going all-in on his idea of advanced machine intelligence after announcing he is leaving his role as Meta's chief AI scientist to start a new firm. During his 12 years at the company, Prof LeCun won the prestigious Turing Award and witnessed several flurries of excitement around AI - not least the most recent boom in generative AI accelerated by rival OpenAI's launch of ChatGPT in late 2022.


From AI for Science to Agentic Science: A Survey on Autonomous Scientific Discovery

Wei, Jiaqi, Yang, Yuejin, Zhang, Xiang, Chen, Yuhan, Zhuang, Xiang, Gao, Zhangyang, Zhou, Dongzhan, Wang, Guangshuai, Gao, Zhiqiang, Cao, Juntai, Qiu, Zijie, Hu, Ming, Ma, Chenglong, Tang, Shixiang, He, Junjun, Song, Chunfeng, He, Xuming, Zhang, Qiang, You, Chenyu, Zheng, Shuangjia, Ding, Ning, Ouyang, Wanli, Dong, Nanqing, Cheng, Yu, Sun, Siqi, Bai, Lei, Zhou, Bowen

arXiv.org Artificial Intelligence

Artificial intelligence (AI) is reshaping scientific discovery, evolving from specialized computational tools into autonomous research partners. We position Agentic Science as a pivotal stage within the broader AI for Science paradigm, where AI systems progress from partial assistance to full scientific agency. Enabled by large language models (LLMs), multimodal systems, and integrated research platforms, agentic AI shows capabilities in hypothesis generation, experimental design, execution, analysis, and iterative refinement -- behaviors once regarded as uniquely human. This survey provides a domain-oriented review of autonomous scientific discovery across life sciences, chemistry, materials science, and physics. We unify three previously fragmented perspectives -- process-oriented, autonomy-oriented, and mechanism-oriented -- through a comprehensive framework that connects foundational capabilities, core processes, and domain-specific realizations. Building on this framework, we (i) trace the evolution of AI for Science, (ii) identify five core capabilities underpinning scientific agency, (iii) model discovery as a dynamic four-stage workflow, (iv) review applications across the above domains, and (v) synthesize key challenges and future opportunities. This work establishes a domain-oriented synthesis of autonomous scientific discovery and positions Agentic Science as a structured paradigm for advancing AI-driven research.


At TIME100 Impact Dinner, AI Leaders Raise a Glass to Centering Humanity

TIME - Tech

The event celebrates the third annual TIME100 AI list, which highlights the 100 most influential people in AI. This year's list includes 84 new honorees--a testament to the dynamism of the field--with those selected ranging in age from 15 to nearly 80. The aim of the TIME list is to show how it is people, not machines, that will determine the direction of AI, and honorees were drawn from every angle of the discipline. The event culminated in four toasts delivered by 2025 TIME100 AI honorees, who highlighted the importance of guiding AI responsibly, including with regulation; protecting human creativity; and fostering collaboration between human and machine intelligence. Stuart Russell, professor of computer science at the University of California, Berkeley, and co-founder of the International Association for Safe and Ethical AI (IASEAI), delivered the first toast--a provocative call to make wise choices about how we use AI, given the high existential stakes involved.


ChatGPT passed the Turing Test. Now what?

Popular Science

ChatGPT passed the Turing Test. The AI fooled 73% of people into thinking it was human, raising new questions about machine intelligence. As artificial intelligence gets better and better, people face machines that look--and act--surprisingly human. It seems that every day brings a new headline about the burgeoning capabilities of large language models (LLMs) like ChatGPT and Google's Gemini--headlines that are either exciting or increasingly apocalyptic, depending on one's point of view. One particularly striking story arrived earlier this year: a paper that described how an LLM had passed the Turing Test, an experiment devised in the 1950s by computer science pioneer Alan Turing to determine whether machine intelligence could be distinguished from that of a human. The LLM in question was ChatGPT 4.5, and the paper found that it had been strikingly successful in fooling people into thinking it was human: in an experiment where participants were asked to choose whether the chatbot or an actual human was the real person, nearly three out of four chose the former.


Apple's Best New iOS 26 Feature Has Been on Pixel Phones for Years

WIRED

Apple's Best New iOS 26 Feature Has Been on Pixel Phones for Years. The iPhone's new software screens your calls using machine intelligence. Neat, but Google had the feature first--just like so many other features that rely on AI to work. Ever since I was a child, I've despised answering the phone when an unknown number calls. Who could be on the other end?


Rethinking Data Protection in the (Generative) Artificial Intelligence Era

Li, Yiming, Shao, Shuo, He, Yu, Guo, Junfeng, Zhang, Tianwei, Qin, Zhan, Chen, Pin-Yu, Backes, Michael, Torr, Philip, Tao, Dacheng, Ren, Kui

arXiv.org Artificial Intelligence

The (generative) artificial intelligence (AI) era has profoundly reshaped the meaning and value of data. No longer confined to static content, data now permeates every stage of the AI lifecycle, from the training samples that shape model parameters to the prompts and outputs that drive real-world model deployment. This shift renders traditional notions of data protection insufficient, while the boundaries of what needs safeguarding remain poorly defined. Failing to safeguard data in AI systems can inflict societal and individual harm, underscoring the urgent need to clearly delineate the scope of data protection and to rigorously enforce it. In this perspective, we propose a four-level taxonomy, comprising non-usability, privacy preservation, traceability, and deletability, that captures the diverse protection needs arising in modern (generative) AI models and systems. Our framework offers a structured understanding of the trade-offs between data utility and control, spanning the entire AI pipeline, including training datasets, model weights, system prompts, and AI-generated content. We analyze representative technical approaches at each level and reveal regulatory blind spots that leave critical assets exposed. By offering a structured lens to align future AI technologies and governance with trustworthy data practices, we underscore the urgency of rethinking data protection for modern AI techniques and provide timely guidance for developers, researchers, and regulators alike.


Future progress in artificial intelligence: A survey of expert opinion

Müller, Vincent C., Bostrom, Nick

arXiv.org Artificial Intelligence

There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be 'bad' or 'extremely bad' for humanity.